
    Distributed Approximation Algorithms for Weighted Shortest Paths

    A distributed network is modeled by a graph having $n$ nodes (processors) and diameter $D$. We study the time complexity of approximating weighted (undirected) shortest paths on distributed networks with an $O(\log n)$ bandwidth restriction on edges (the standard synchronous CONGEST model). The question of whether approximation algorithms help speed up shortest paths (more precisely, distance computation) has been raised since at least 2004 by Elkin (SIGACT News 2004). The unweighted case of this problem is well understood, while its weighted counterpart is a fundamental problem in the area of distributed approximation algorithms and remains widely open. We present new algorithms for computing both single-source shortest paths (SSSP) and all-pairs shortest paths (APSP) in the weighted case. Our main result is an algorithm for SSSP. Previous results are the classic $O(n)$-time Bellman-Ford algorithm and an $\tilde O(n^{1/2+1/2k}+D)$-time $(8k\lceil \log (k+1) \rceil - 1)$-approximation algorithm, for any integer $k \geq 1$, which follows from the result of Lenzen and Patt-Shamir (STOC 2013). (Note that Lenzen and Patt-Shamir in fact solve a harder problem, and we use $\tilde O(\cdot)$ to hide the $O(\mathrm{poly}\log n)$ term.) We present an $\tilde O(n^{1/2}D^{1/4}+D)$-time $(1+o(1))$-approximation algorithm for SSSP. This algorithm is sublinear-time as long as $D$ is sublinear, thus yielding a sublinear-time algorithm with an almost optimal solution. When $D$ is small, our running time matches the lower bound of $\tilde \Omega(n^{1/2}+D)$ by Das Sarma et al. (SICOMP 2012), which holds even when $D = \Theta(\log n)$, up to a $\mathrm{poly}\log n$ factor.

    Comment: Full version of STOC 2014 paper.
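
    The classic $O(n)$-time baseline mentioned above is the distributed Bellman-Ford algorithm. A minimal round-based simulation sketch in Python (illustrative names, not the paper's approximation algorithm): each node repeatedly announces its current distance estimate to its neighbors and relaxes over what it receives.

    ```python
    # Minimal sketch of synchronous distributed Bellman-Ford, the classic
    # O(n)-round SSSP baseline. The round-based driver and names are
    # illustrative, not the paper's sublinear-time algorithm.
    import math

    def distributed_bellman_ford(adj, source):
        """adj: {node: [(neighbor, weight), ...]}; returns distance estimates.

        Each synchronous round, every node broadcasts its current estimate
        to its neighbors (an O(log n)-bit message in CONGEST, assuming
        polynomially bounded weights) and relaxes over the received values.
        """
        dist = {v: math.inf for v in adj}
        dist[source] = 0
        for _ in range(len(adj) - 1):          # n-1 rounds suffice to converge
            msgs = dict(dist)                  # estimates sent this round
            new = dict(dist)
            for v in adj:
                for u, w in adj[v]:
                    new[v] = min(new[v], msgs[u] + w)
            dist = new
        return dist

    # Example: path graph 0-1-2-3 with unit weights
    adj = {0: [(1, 1)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (3, 1)], 3: [(2, 1)]}
    print(distributed_bellman_ford(adj, 0))    # {0: 0, 1: 1, 2: 2, 3: 3}
    ```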

    Two-Bit Messages are Sufficient to Implement Atomic Read/Write Registers in Crash-prone Systems

    Atomic registers are certainly the most basic objects of computing science. Their implementation on top of an n-process asynchronous message-passing system has received a lot of attention. It has been shown that t < n/2 (where t is the maximal number of processes that may crash) is a necessary and sufficient requirement to build an atomic register on top of a crash-prone asynchronous message-passing system. Considering such a context, this paper presents an algorithm which implements a single-writer multi-reader atomic register with four message types only, and where no message needs to carry control information in addition to its type. Hence, two bits are sufficient to capture all the control information carried by all the implementation messages. Moreover, the messages of two types need to carry a data value, while the messages of the two other types carry no value at all. As far as we know, this algorithm is the first with such an optimality property on the size of control information carried by messages. It is also particularly efficient from a time complexity point of view.
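
    The t < n/2 requirement reflects the standard quorum argument: any two majorities of n processes intersect, so a reader's quorum always meets the quorum used by the most recent write. A toy illustration of that intersection property (not the paper's algorithm):

    ```python
    # Why t < n/2 is necessary and sufficient for quorum-based registers:
    # any two majority quorums of n processes intersect, so a read quorum
    # always meets the most recent write quorum. This is a toy check of
    # the quorum condition, not the paper's four-message-type algorithm.
    from itertools import combinations

    def majorities(n):
        """All minimal majority quorums of processes 0..n-1."""
        q = n // 2 + 1
        return [set(c) for c in combinations(range(n), q)]

    n = 5
    assert all(a & b for a in majorities(n) for b in majorities(n)), \
        "any two majority quorums intersect"
    print(f"n={n}: every pair of {n // 2 + 1}-process quorums intersects")
    ```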

    Stochastic delocalization of finite populations

    Heterogeneities in environmental conditions often induce corresponding heterogeneities in the distribution of species. In the extreme case of a localized patch of increased growth rates, reproducing populations can become strongly concentrated at the patch despite the entropic tendency of the population to spread evenly. Several deterministic mathematical models have been used to characterize the conditions under which localized states can form, and how they break down due to convective driving forces. Here, we study the delocalization of a finite population in the presence of number fluctuations. We find that any finite population delocalizes on sufficiently long time scales. Depending on parameters, however, populations may remain localized for a very long time. The typical waiting time to delocalization increases exponentially with both population size and distance to the critical wind speed of the deterministic approximation. We augment these simulation results with a mathematical analysis that treats the reproduction and migration of individuals as branching random walks subject to global constraints. For a particular constraint, different from a fixed population size constraint, this model yields a solvable first moment equation. We find that this solvable model approximates the fixed population size model very well for large populations, but starts to deviate as population size becomes small. The analytical approach allows us to map out a phase diagram of the order parameter as a function of the two driving parameters, inverse population size and wind speed. Our results may be used to extend the analysis of delocalization transitions to different settings, such as the viral quasi-species scenario.
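
    A toy version of the kind of stochastic simulation described above: a finite population on a 1D lattice with a growth hotspot, a convective "wind", and a fixed-population-size constraint. All parameter values and the update rule are illustrative.

    ```python
    # Toy stochastic simulation in the spirit of the delocalization study:
    # individuals on a 1D lattice reproduce slightly faster at a hotspot
    # (site 0) while a biased random walk ("wind") drifts them away. The
    # parameters and update rule are illustrative only.
    import random

    L, N, steps = 200, 500, 2000       # lattice half-width, pop. size, steps
    hotspot_gain, wind = 0.1, 0.6      # extra growth at site 0; drift bias

    pop = [0] * N                      # everyone starts at the hotspot
    for t in range(steps):
        new = []
        for x in pop:
            # migration: biased random walk (wind pushes to the right)
            x = min(L - 1, max(1 - L, x + (1 if random.random() < wind else -1)))
            # reproduction: higher birth weight at the hotspot
            weight = 1.0 + (hotspot_gain if x == 0 else 0.0)
            new.extend([x] * (2 if random.random() < weight / 2 else 1))
        # global constraint: resample back to fixed population size N
        pop = random.sample(new, N) if len(new) > N else new

    mean_x = sum(pop) / len(pop)
    print(f"mean position after {steps} steps: {mean_x:.1f} (0 = hotspot)")
    ```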

    Dynamical and spectral properties of complex networks

    Dynamical properties of complex networks are related to the spectral properties of the Laplacian matrix that describes the pattern of connectivity of the network. In particular, we compute the synchronization time for different types of networks and different dynamics. We show that the synchronization time depends mainly on the smallest nonzero eigenvalue of the Laplacian matrix, in contrast to other proposals in terms of the spectrum of the adjacency matrix. This topological property thus becomes the most relevant one for the dynamics.

    Comment: 14 pages, 5 figures, to be published in New Journal of Physics.
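
    A minimal sketch of the quantity the abstract highlights: the smallest nonzero eigenvalue of the graph Laplacian L = D - A, whose inverse sets the synchronization time scale. The example graph is arbitrary.

    ```python
    # Sketch: compute the smallest nonzero Laplacian eigenvalue, the
    # quantity the abstract ties to the synchronization time (~ 1/lambda_2).
    # The example graph is arbitrary; numpy is the only dependency.
    import numpy as np

    def laplacian_gap(adj_matrix):
        """Return lambda_2, the smallest nonzero eigenvalue of L = D - A."""
        A = np.asarray(adj_matrix, dtype=float)
        L = np.diag(A.sum(axis=1)) - A
        eig = np.sort(np.linalg.eigvalsh(L))   # L is symmetric
        return eig[1]                          # eig[0] ~ 0 for connected graphs

    # Example: a 4-cycle; lambda_2 = 2 - 2*cos(2*pi/4) = 2
    C4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
    print(laplacian_gap(C4))   # ~2.0, so the sync time scales like 1/2
    ```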

    Reasoning about goal-directed real-time teleo-reactive programs

    The teleo-reactive programming model is a high-level approach to developing real-time systems that supports hierarchical composition and durative actions. The model is different from frameworks such as action systems, timed automata and TLA+, and allows programs to be more compact and descriptive of their intended behaviour. Teleo-reactive programs are particularly useful for implementing controllers for autonomous agents that must react robustly to their dynamically changing environments. In this paper, we develop a real-time logic based on Duration Calculus and use this logic to formalise the semantics of teleo-reactive programs. We develop rely/guarantee rules that facilitate reasoning about a program and its environment in a compositional manner. We present several theorems for simplifying proofs of teleo-reactive programs, together with a partially mechanised method for proving progress properties of goal-directed agents. © 2013 British Computer Society
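
    A teleo-reactive program is an ordered list of condition-action rules that is rescanned continuously; the first rule whose condition holds supplies the (durative) action. A minimal interpreter sketch with hypothetical rules, not taken from the paper:

    ```python
    # Minimal sketch of a teleo-reactive interpreter: rules are ordered
    # (condition, action) pairs, rescanned every tick; the first rule whose
    # condition holds supplies the durative action. The conditions and
    # actions below are hypothetical.
    def tr_step(rules, state):
        """Scan rules top-down and run the first enabled action."""
        for condition, action in rules:
            if condition(state):
                action(state)
                return
        raise RuntimeError("no rule enabled: program is incomplete")

    # Toy goal-directed agent: drive position toward the goal at 10.
    rules = [
        (lambda s: s["pos"] == 10, lambda s: None),                  # goal: idle
        (lambda s: s["pos"] < 10,  lambda s: s.__setitem__("pos", s["pos"] + 1)),
        (lambda s: True,           lambda s: s.__setitem__("pos", s["pos"] - 1)),
    ]
    state = {"pos": 3}
    for _ in range(12):
        tr_step(rules, state)
    print(state["pos"])   # 10: the goal condition now guards an idle action
    ```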

    RADON: Repairable Atomic Data Object in Networks

    Erasure codes offer an efficient way to decrease storage and communication costs while implementing atomic memory service in asynchronous distributed storage systems. In this paper, we provide erasure-code-based algorithms having the additional ability to perform background repair of crashed nodes. A repair operation of a node in the crashed state is triggered externally, and is carried out by the concerned node via message exchanges with other active nodes in the system. Upon completion of repair, the node re-enters the active state, and resumes participation in ongoing and future read, write, and repair operations. To guarantee liveness and atomicity simultaneously, existing works assume either the presence of nodes with stable storage, or the presence of nodes that never crash during the execution. We demand neither of these; instead, we consider a natural yet practical network stability condition N1 that only restricts the number of nodes in the crashed/repair state during the broadcast of any message. We present an erasure-code-based algorithm RADON_C that is always live, and guarantees atomicity as long as condition N1 holds. In situations when the number of concurrent writes is limited, RADON_C has significantly improved storage and communication cost over a replication-based algorithm RADON_R, which also works under N1. We further show how a slightly stronger network stability condition N2 can be used to construct algorithms that never violate atomicity. The guarantee of atomicity comes at the expense of an additional phase during the read and write operations.
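
    The storage saving over replication follows from standard erasure-coding arithmetic: an (n, k) MDS code stores n fragments of size B/k instead of n full B-byte copies. A back-of-envelope sketch, with illustrative parameter values:

    ```python
    # Back-of-envelope storage cost comparison behind RADON_C vs RADON_R:
    # replication keeps n full copies of a B-byte object, while an (n, k)
    # MDS erasure code keeps n fragments of B/k bytes each (any k of them
    # suffice to decode). Parameter values are illustrative.
    def replication_cost(n, B):
        return n * B                   # n full copies

    def erasure_cost(n, k, B):
        return n * (B / k)             # n fragments of size B/k

    n, k, B = 5, 3, 1_000_000          # 5 nodes, tolerate n-k=2 losses, 1 MB
    print(replication_cost(n, B))      # 5,000,000 bytes
    print(erasure_cost(n, k, B))       # ~1,666,667 bytes: a 3x saving
    ```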

    How Long It Takes for an Ordinary Node with an Ordinary ID to Output?

    In the context of distributed synchronous computing, processors perform in rounds, and the time-complexity of a distributed algorithm is classically defined as the number of rounds before all computing nodes have output. Hence, this complexity measure captures the running time of the slowest node(s). In this paper, we are interested in the running time of the ordinary nodes, to be compared with the running time of the slowest nodes. The node-averaged time-complexity of a distributed algorithm on a given instance is defined as the average, taken over every node of the instance, of the number of rounds before that node outputs. We compare the node-averaged time-complexity with the classical one in the standard LOCAL model for distributed network computing. We show that there can be an exponential gap between the node-averaged time-complexity and the classical time-complexity, as witnessed by, e.g., leader election. Our first main result is a positive one, stating that, in fact, the two time-complexities behave the same for a large class of problems on very sparse graphs. In particular, we show that, for LCL problems on cycles, the node-averaged time-complexity is of the same order of magnitude as the slowest-node time-complexity. In addition, in the LOCAL model, the time-complexity is computed as a worst case over all possible identity assignments to the nodes of the network. In this paper, we also investigate the ID-averaged time-complexity, when the number of rounds is averaged over all possible identity assignments. Our second main result is that the ID-averaged time-complexity is essentially the same as the expected time-complexity of randomized algorithms (where the expectation is taken over all possible random bits used by the nodes, and the number of rounds is measured for the worst-case identity assignment). Finally, we study the node-averaged ID-averaged time-complexity.

    Comment: (Submitted) Journal version.
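
    The two measures can be contrasted on a toy example: broadcasting from a source on a path, the node at distance d outputs at round d, so the classical (slowest-node) complexity is the diameter while the node-averaged one is roughly half of it. The sketch below illustrates the definitions only; the paper's exponential gap comes from problems such as leader election.

    ```python
    # Toy contrast between the classical (slowest-node) time-complexity and
    # the node-averaged one, using broadcast on a path: the node at distance
    # d from the source outputs at round d. Illustrates the definitions only.
    def broadcast_output_rounds(n, source=0):
        """Round at which each node of an n-node path outputs."""
        return [abs(v - source) for v in range(n)]

    rounds = broadcast_output_rounds(100)
    print(max(rounds))                   # classical: 99 rounds
    print(sum(rounds) / len(rounds))     # node-averaged: 49.5 rounds
    ```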

    Extending the theory of Owicki and Gries with a logic of progress

    This paper describes a logic of progress for concurrent programs. The logic is based on that of UNITY, molded to fit a sequential programming model. Integration of the two is achieved by using auxiliary variables in a systematic way that incorporates program counters into the program text. The rules for progress in UNITY are then modified to suit this new system. This modification is, however, subtle enough to allow the theory of Owicki and Gries to be used without change.
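
    The auxiliary-variable idea can be made concrete: each atomic statement is augmented so that it also updates an explicit program counter, letting progress assertions refer to control state directly. A hypothetical two-statement example, not taken from the paper:

    ```python
    # Sketch of the auxiliary-variable idea: each atomic statement of a
    # process also updates an explicit program counter, so progress
    # assertions ("eventually pc == 2") can mention control state directly.
    # The two-step process below is hypothetical.
    state = {"x": 0, "pc": 0}

    def step(state):
        if state["pc"] == 0:
            state["x"] += 1           # original statement S0...
            state["pc"] = 1           # ...augmented with the pc update
        elif state["pc"] == 1:
            state["x"] *= 2           # original statement S1...
            state["pc"] = 2           # ...augmented with the pc update

    while state["pc"] != 2:           # progress: pc reaches 2 in two steps
        step(state)
    assert state == {"x": 2, "pc": 2}
    print(state)
    ```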

    From ultraviolet to Prussian blue: a spectral response for the cyanotype process and a safe educational activity to explain UV exposure for all ages

    Engaging students and the public in understanding UV radiation and its effects is achievable using a real-time experiment that incorporates blueprint paper, an 'educational toy' that provides a safe and easy demonstration of the cyanotype chemical process. The cyanotype process is driven by UV radiation. The blueprint paper was investigated not only as a way of engaging the public in discussion about UV radiation, but also as a practical introduction to the measurement of UV radiation exposure and, as a consequence, to digital image analysis. Print methods, dose response, spectral response and dark response were investigated. Two methods of image analysis for dose response calculation are provided using easy-to-access software, and two methods of pixel count analysis were used to determine spectral response characteristics. Variation in the manufacture of the blueprint paper product introduces some variance between measurements. Most importantly, as a result of this investigation, a preliminary spectral response range for the radiation required to produce the cyanotype reaction is presented here, which has until now been unknown.
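
    A sketch of the pixel-based analysis described above: quantifying the dose response by averaging the blue-channel intensity of scanned prints. The file names, and the blue channel as a proxy for Prussian blue density, are assumptions rather than the paper's exact protocol.

    ```python
    # Sketch of the pixel-count style analysis described above: quantify how
    # "blue" a scanned cyanotype print is by averaging its blue-channel
    # intensity. File names and the blue channel as a proxy for Prussian
    # blue density are assumptions, not the paper's exact protocol.
    from PIL import Image
    import numpy as np

    def mean_blueness(path):
        """Mean blue-channel value (0-255) over all pixels of a scan."""
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        return rgb[..., 2].mean()

    # Compare prints exposed for increasing times (hypothetical scans):
    for exposure, path in [(1, "scan_1min.png"), (5, "scan_5min.png")]:
        print(exposure, "min:", mean_blueness(path))
    ```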

    The prediction of future from the past: an old problem from a modern perspective

    The idea of predicting the future from knowledge of the past is quite natural when dealing with systems whose equations of motion are not known. This long-standing issue is revisited in the light of the modern ergodic theory of dynamical systems, and becomes particularly interesting from a pedagogical perspective due to its close link with Poincaré recurrence. Using this connection, a very general result of ergodic theory - Kac's lemma - can be used to establish the intrinsic limitations to the possibility of predicting the future from the past. In spite of a naive expectation, predictability turns out to be hindered by the effective number of degrees of freedom of a system rather than by the presence of chaos. If the effective number of degrees of freedom becomes large enough, regardless of the regular or chaotic nature of the system, predictions turn out to be practically impossible. The discussion of these issues is illustrated with the help of the numerical study of simple models.

    Comment: 9 pages, 4 figures.
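
    Kac's lemma states that, for an ergodic measure-preserving map, the mean return time to a set A equals 1/mu(A). A quick numerical check using the irrational rotation x -> x + alpha (mod 1), which preserves Lebesgue measure (the map and set are illustrative, not necessarily those studied in the paper):

    ```python
    # Numerical check of Kac's lemma, the ergodic-theory result the abstract
    # leans on: for an ergodic measure-preserving map, the mean return time
    # to a set A is 1/mu(A). Here: the irrational rotation x -> x + alpha
    # mod 1 (alpha the golden mean), which preserves Lebesgue measure, with
    # A = [0, 0.01), so the predicted mean return time is 100.
    import math, random

    def mean_return_time(eps=0.01, trials=5000):
        alpha = (math.sqrt(5) - 1) / 2       # irrational rotation angle
        total = 0
        for _ in range(trials):
            x = random.uniform(0.0, eps)     # start inside A
            t = 0
            while True:
                x = (x + alpha) % 1.0
                t += 1
                if x < eps:                  # returned to A
                    total += t
                    break
        return total / trials

    print(mean_return_time())   # ~100 = 1/mu(A), as Kac's lemma predicts
    ```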
